ARTIFICIAL NEURAL NETWORKS
Study Group 17, Spring 92
BIOLOGICAL INSPIRATIONS
Some numbers:
- The human brain contains about 10 billion nerve cells (neurons)
- Each neuron is connected to the others through 10,000 synapses
Properties of the brain:
- It can learn, reorganize itself from experience
- It adapts to the environment
- It is robust and fault tolerant
WHAT IS AN ARTIFICIAL NEURON?
Definition: a non-linear, parameterized function with a restricted output range:
  y = f( w_0 + \sum_{i=1}^{n} w_i x_i )
[Figure: inputs x_1, x_2, x_3 feeding a neuron with weights w_i and bias w_0]
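A minimal sketch of this neuron in Python (the function names, the example values, and the choice of logistic activation are illustrative, not from the slides):

    import numpy as np

    def neuron(x, w, w0, f):
        # y = f(w0 + sum_i w_i * x_i), with a bounded activation f
        return f(w0 + np.dot(w, x))

    # Illustrative use with the logistic activation from the next slide
    logistic = lambda v: 1.0 / (1.0 + np.exp(-v))
    y = neuron(np.array([1.0, 2.0, 3.0]), np.array([0.5, -0.2, 0.1]), 0.3, logistic)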
ACTIVATION FUNCTIONS
- Linear: y = x
- Logistic: y = 1 / (1 + exp(-x))
- Hyperbolic tangent: y = (exp(x) - exp(-x)) / (exp(x) + exp(-x))
[Figure: plots of the three activation functions]
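The three activations written out as Python functions (a sketch; hyperbolic_tangent is simply np.tanh spelled out):

    import numpy as np

    def linear(x):
        return x                                  # unbounded output

    def logistic(x):
        return 1.0 / (1.0 + np.exp(-x))           # output in (0, 1)

    def hyperbolic_tangent(x):
        # equivalent to np.tanh(x); output in (-1, 1)
        return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))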
NEURAL NETWORKS
- A mathematical model to solve engineering problems
- Groups of highly connected neurons realize compositions of non-linear functions
Tasks:
- Classification
- Discrimination
- Estimation
2 types of networks:
- Feed-forward neural networks
- Recurrent neural networks
FEED-FORWARD NEURAL NETWORKS
- The information is propagated from the inputs to the outputs
- Time has no role (no cycle between outputs and inputs)
[Figure: inputs x_1, x_2, ..., x_n feed the 1st hidden layer, then the 2nd hidden layer, then the output layer]
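A minimal sketch of forward propagation, assuming one activation function f shared by all layers (the 2-3-1 shapes and random parameters are only for illustration):

    import numpy as np

    def forward(x, weights, biases, f):
        # Propagate the input through the layers; no cycles, so time plays no role
        a = x
        for W, b in zip(weights, biases):
            a = f(W @ a + b)
        return a

    # Illustrative 2-3-1 network with random parameters
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=(3, 2)), rng.normal(size=(1, 3))]
    biases = [np.zeros(3), np.zeros(1)]
    y = forward(np.array([0.5, -1.0]), weights, biases, np.tanh)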
RECURRENT NEURAL NETWORKS
- Can have arbitrary topologies
- Can model systems with internal states (dynamic ones)
- Delays are associated with a specific weight
- Training is more difficult
- Performance may be problematic:
  - Stable outputs may be more difficult to evaluate
  - Unexpected behavior (oscillation, chaos, ...)
[Figure: a recurrent network over inputs x_1, x_2 with delayed feedback connections]
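A sketch of one step of a simple (Elman-style) recurrent update; the weight names are illustrative, not from the slides:

    import numpy as np

    def rnn_step(x, h, W_in, W_rec, b, f=np.tanh):
        # The previous state h re-enters through the delayed, weighted
        # connection W_rec, giving the network an internal state (memory)
        return f(W_in @ x + W_rec @ h + b)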
LEARNING
The procedure that consists of estimating the parameters of the neurons so that the whole network can perform a specific task.
2 types of learning:
- Supervised learning
- Unsupervised learning
The (supervised) learning process:
- Present the network with a number of inputs and their corresponding outputs
- See how closely the actual outputs match the desired ones
- Modify the parameters to better approximate the desired outputs
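A toy illustration of this loop in Python, assuming a one-parameter linear model and a gradient-descent update (an illustration of the process, not an algorithm from the slides):

    import numpy as np

    # Toy supervised learning: fit y = w * x to teacher-provided targets
    x = np.array([0.0, 1.0, 2.0, 3.0])
    y_desired = 2.0 * x                    # desired outputs
    w = 0.0
    for _ in range(100):
        y_actual = w * x                   # present the inputs
        error = y_actual - y_desired       # compare with the desired outputs
        w -= 0.01 * np.dot(error, x)       # modify the parameter to reduce the error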
SUPERVISED LEARNING
- The desired response of the neural network as a function of particular inputs is well known
- A teacher may provide examples and teach the neural network how to fulfill a certain task
UNSUPERVISED LEARNING
- Idea: group typical input data according to resemblance criteria unknown a priori (data clustering)
- No need for a teacher: the network finds the correlations between the data by itself
- Examples of such networks: Kohonen feature maps
PROPERTIES OF NEURAL NETWORKS
- Supervised (non-recurrent) networks are universal approximators
- Theorem: any bounded function can be approximated to an arbitrary precision by a neural network with a finite number of hidden neurons
Types of approximators:
- Non-linear approximators (NNs): for a given precision, the number of parameters grows linearly with the number of variables
- Linear approximators (e.g. polynomials): the number of parameters grows exponentially with the number of variables
OTHER PROPERTIES
- Adaptivity: weights adapt to the environment, and the network is easily retrained
- Generalization ability: may provide sensible outputs for inputs absent from the training data
- Fault tolerance: graceful degradation of performance if damaged => the information is distributed within the entire net
Not an approximation but a fitting problem:
- Approximation of the regression function: estimate the most probable value of y_p for a given input x
- Cost function:
  J(w) = \frac{1}{2} \sum_{k=1}^{N} ( y_p(x^k) - g(x^k, w) )^2
- Goal: minimize the cost function by determining the right function g
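The cost function J(w) as a Python sketch (the linear model in the usage line is a made-up example):

    import numpy as np

    def cost(w, g, xs, ys):
        # J(w) = 1/2 * sum_k ( y_p(x^k) - g(x^k, w) )^2
        return 0.5 * sum((yk - g(xk, w)) ** 2 for xk, yk in zip(xs, ys))

    # Illustrative use with a linear model g(x, w) = w * x
    J = cost(2.0, lambda x, w: w * x, [0.0, 1.0, 2.0], [0.0, 2.1, 3.9])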
EXAMPLE
CLASSIFICATION (DISCRIMINATION)
- Classify objects into defined categories
- Rough decision, or estimation of the probability that a certain object belongs to a specific class
- Example: data mining
- Applications: economy, speech and pattern recognition, sociology, etc.
EXAMPLE
Examples of handwritten postal codes drawn from a database available from the US Postal Service
WHAT DO WE NEED TO USE NN?
- Determination of pertinent inputs
- Collection of data for the learning and testing phases of the neural network
- Finding the optimum number of hidden nodes
- Estimating the parameters (learning)
- Evaluating the performance of the network
- If the performance is not satisfactory, review all the preceding points
CLASSICAL NEURAL ARCHITECTURES
- Perceptron
- Multi-Layer Perceptron
- Radial Basis Function (RBF)
- Kohonen feature maps
- Other architectures; an example: shared-weights neural networks
PERCEPTRON
- Rosenblatt (1962)
- Linear separation
- Inputs: vector of real values
- Outputs: 1 or -1
  y = sign(v),   v = c_0 + c_1 x_1 + c_2 x_2
[Figure: two classes of points in the (x_1, x_2) plane, separated by the line c_0 + c_1 x_1 + c_2 x_2 = 0 (y = 1 on one side, y = -1 on the other)]
The perceptron algorithm converges if examples are linearly separable
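A minimal sketch of Rosenblatt's learning rule (the fixed epoch count is an illustrative stopping criterion; with linearly separable data the inner updates eventually stop):

    import numpy as np

    def perceptron_train(X, y, epochs=100):
        # Labels y in {-1, +1}; learns c0 + c . x whose sign separates the classes
        c0, c = 0.0, np.zeros(X.shape[1])
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if yi * (c0 + np.dot(c, xi)) <= 0:   # misclassified example
                    c0 += yi                          # move the boundary toward it
                    c += yi * xi
        return c0, c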
MULTI-LAYER PERCEPTRON
- One or more hidden layers
- Sigmoid activation functions
[Figure: input data -> 1st hidden layer -> 2nd hidden layer -> output layer]
LEARNING
Back-propagation algorithm:
  net_j = \sum_i w_{ji} o_i,   o_j = f(net_j),   E = \frac{1}{2} \sum_o (t_o - o_o)^2
  \frac{\partial E}{\partial w_{ji}} = \frac{\partial E}{\partial net_j} \frac{\partial net_j}{\partial w_{ji}} = \frac{\partial E}{\partial net_j} o_i
Credit assignment: \delta_j = -\frac{\partial E}{\partial net_j}
If the j-th node is an output unit:
  \delta_j = f'(net_j)(t_j - o_j)
If the k-th node is a hidden unit:
  \delta_k = f'(net_k) \sum_j w_{jk} \delta_j
Weight update, with a momentum term (\alpha) to smooth the weight changes over time:
  \Delta w_{ji}(t) = \eta \delta_j o_i + \alpha \Delta w_{ji}(t-1)
  w_{ji}(t) = w_{ji}(t-1) + \Delta w_{ji}(t)
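A sketch of one back-propagation update with momentum for a two-layer network, following the formulas above (biases are omitted for brevity; names like bp_step are illustrative):

    import numpy as np

    def f(v):  return 1.0 / (1.0 + np.exp(-v))   # logistic activation
    def df(v): return f(v) * (1.0 - f(v))        # its derivative f'(net)

    def bp_step(x, t, W1, W2, dW1, dW2, eta=0.5, alpha=0.9):
        # Forward pass
        net1 = W1 @ x;  o1 = f(net1)
        net2 = W2 @ o1; o2 = f(net2)
        # Credit assignment: deltas for output and hidden units
        delta2 = df(net2) * (t - o2)             # output units: f'(net)(t - o)
        delta1 = df(net1) * (W2.T @ delta2)      # hidden units: f'(net) sum_j w_j delta_j
        # Weight changes, smoothed over time by the previous change (momentum)
        dW2 = eta * np.outer(delta2, o1) + alpha * dW2
        dW1 = eta * np.outer(delta1, x)  + alpha * dW1
        return W1 + dW1, W2 + dW2, dW1, dW2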
DIFFERENT NON-LINEARLY SEPARABLE PROBLEMS

  Structure    | Types of Decision Regions
  -------------|------------------------------------------------
  Single-Layer | Half plane bounded by hyperplane
  Two-Layer    | Convex open or closed regions
  Three-Layer  | Arbitrary (complexity limited by no. of nodes)

[Figure: for each structure, example decision regions for the exclusive-OR problem, for classes with meshed regions, and the most general region shapes]
(From "Neural Networks: An Introduction", Dr. Andrew Hunter)
RADIAL BASIS FUNCTIONS (RBFS)
Features:
- One hidden layer
- The activation of a hidden unit is determined by the distance between the input vector and a prototype vector
[Figure: inputs -> radial units -> outputs]
- RBF hidden layer units have a receptive field which has a centre
- Generally, the hidden unit function is Gaussian
- The output layer is linear
- Realized function:
  s(x) = \sum_{c=1}^{K} W_c \exp( -\frac{\|x - x_c\|^2}{2\sigma_c^2} )
LEARNING
The training is performed by deciding on:
- How many hidden nodes there should be
- The centers and the sharpness of the Gaussians
2 steps:
- In the 1st stage, the input data set is used to determine the parameters of the basis functions
- In the 2nd stage, the basis functions are kept fixed while the second-layer weights are estimated (a simple BP algorithm, as for MLPs)
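A sketch of the 2nd stage in Python, assuming the centers and a shared width sigma were already chosen in stage 1 (e.g. by clustering); since the output layer is linear, a direct least-squares fit can stand in for the iterative BP-style estimation mentioned above:

    import numpy as np

    def rbf_features(X, centers, sigma):
        # Gaussian hidden units: exp(-||x - c||^2 / (2 sigma^2))
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def rbf_fit_output_layer(X, y, centers, sigma):
        # Stage 2: centers and widths fixed; the linear output weights
        # follow from a least-squares fit
        Phi = rbf_features(X, centers, sigma)
        W, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return W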
MLPS VERSUS RBFS
Classification:
- MLPs separate classes via hyperplanes
- RBFs separate classes via hyperspheres
Learning:
- MLPs use distributed learning
- RBFs use localized learning
- RBFs train faster
Structure:
- MLPs have one or more hidden layers
- RBFs have only one hidden layer
- RBFs require more hidden neurons => curse of dimensionality
[Figure: in the (X_1, X_2) plane, MLP decision boundaries are hyperplanes while RBF boundaries are hyperspheres]
SELF-ORGANIZING MAPS
- The purpose of SOM is to map a multidimensional input space onto a topology-preserving map of neurons
- Preserve the topology so that neighboring neurons respond to "similar" input patterns
- The topological structure is often a 2- or 3-dimensional space
- Each neuron is assigned a weight vector with the same dimensionality as the input space
- Input patterns are compared to each weight vector and the closest wins (Euclidean distance)
- The activation of the neuron is spread in its direct neighborhood => neighbors become sensitive to the same input patterns
- The size of the neighborhood is initially large but reduces over time => specialization of the network
[Figure: first and 2nd neighborhoods of a winner neuron, measured in block distance]
ADAPTATION
- During training, the winner neuron and its neighborhood adapt to make their weight vectors more similar to the input pattern that caused the activation
- The neurons are moved closer to the input pattern
- The magnitude of the adaptation is controlled via a learning parameter which decays over time
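A sketch of one SOM adaptation step under these rules (the grid array, holding each neuron's map coordinates, and the function name are illustrative):

    import numpy as np

    def som_step(x, W, grid, lr, radius):
        # Winner: the neuron whose weight vector is closest (Euclidean distance)
        winner = np.argmin(((W - x) ** 2).sum(axis=1))
        # Block distance on the map grid defines the neighborhood
        d = np.abs(grid - grid[winner]).max(axis=1)
        neighbors = (d <= radius)[:, None]
        # Move the winner and its neighbors closer to the input pattern;
        # lr and radius should both decay over time
        return W + lr * neighbors * (x - W)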
SHARED-WEIGHTS NEURAL NETWORKS: TIME DELAY NEURAL NETWORKS (TDNNS)
- Introduced by Waibel in 1989
Properties:
- Local, shift-invariant feature extraction
- Notion of receptive fields combining local information into more abstract patterns at a higher level
- Weight-sharing concept (all neurons in a feature share the same weights)
- All neurons detect the same feature but in different positions
Principal applications:
- Speech recognition
- Image analysis
TDNNS (CONT'D)
- Object recognition in an image
- Each hidden unit receives inputs only from a small region of the input space: its receptive field
- Shared weights for all receptive fields => translation invariance in the response of the network
[Figure: inputs -> hidden layer 1 -> hidden layer 2, with local receptive fields]
Advantages:
- Reduced number of weights
- Requires fewer examples in the training set
- Faster learning
- Invariance under time or space translation
- Faster execution of the net (in comparison with a fully connected MLP)
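A minimal sketch of a shared-weights (TDNN-style) layer over a 1-D input, illustrating the receptive-field and weight-sharing ideas (names are illustrative):

    import numpy as np

    def shared_weight_layer(x, w, f=np.tanh):
        # Every receptive field applies the same weight vector w, so the same
        # feature is detected at every position (shift invariance)
        k = len(w)
        return np.array([f(np.dot(w, x[i:i + k])) for i in range(len(x) - k + 1)])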
NEURAL NETWORKS (APPLICATIONS)
- Security, face recognition
- Time series prediction
- Process identification
- Process control
- Optical character recognition
- Adaptive filtering
- Etc.
CONCLUSION ON NEURAL NETWORKS
- Neural networks are used as statistical tools: they adjust non-linear functions to fulfill a task
- They need multiple and representative examples, but fewer than other methods
- Neural networks can model complex static phenomena as well as dynamic ones
- NNs are good classifiers and are used in security tasks
- The use of NNs requires a good comprehension of the problem
THANK YOU!